SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention
Despite many recent works on Mixture of Experts (MoEs) for resource-efficient Transformer language models, existing methods mostly focus on MoEs for feedforward layers. Previous attempts at extending MoE to the self-attention layer fail to match the performance of the parameter-matched baseline. Our novel SwitchHead is an effective MoE method for the attention layer that successfully reduces both the compute and memory requirements, achieving wall-clock speedup, while matching the language modeling performance of the baseline Transformer. The MoE mechanism allows SwitchHead to compute up to 8 times fewer attention matrices than the standard Transformer. SwitchHead can also be combined with MoE feedforward layers, resulting in fully-MoE "SwitchAll" Transformers. For our 262M parameter model trained on C4, SwitchHead matches the perplexity of standard models with only 44% compute and 27% memory usage. Zero-shot experiments on downstream tasks confirm the performance of SwitchHead, e.g., achieving a more than 3.5% absolute improvement on BLiMP compared to the baseline with equal compute.
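To make the mechanism concrete, here is a minimal PyTorch sketch of the general idea: the query/key projections (and hence the attention matrix) are shared per head, while a sigmoid router activates only a few value/output projection experts. Routing granularity, sizes, and names are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MoEAttentionHead(nn.Module):
    """One attention head with mixture-of-experts value/output
    projections, sketched in the spirit of SwitchHead. Routing here is
    per sequence for brevity; finer-grained routing is an assumption."""
    def __init__(self, d_model, d_head, n_experts=4, k_active=2):
        super().__init__()
        self.wq = nn.Linear(d_model, d_head, bias=False)
        self.wk = nn.Linear(d_model, d_head, bias=False)
        self.v_experts = nn.ModuleList(
            nn.Linear(d_model, d_head, bias=False) for _ in range(n_experts))
        self.o_experts = nn.ModuleList(
            nn.Linear(d_head, d_model, bias=False) for _ in range(n_experts))
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.k_active = k_active
        self.scale = d_head ** -0.5

    def forward(self, x):                    # x: (batch, seq, d_model)
        gate = torch.sigmoid(self.router(x.mean(dim=1)))     # (batch, n_experts)
        weights, idx = gate.topk(self.k_active, dim=-1)
        att = torch.softmax(
            self.wq(x) @ self.wk(x).transpose(-2, -1) * self.scale, dim=-1)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):           # plain loops for clarity
            for w, e in zip(weights[b], idx[b]):
                v = self.v_experts[e](x[b])                  # (seq, d_head)
                out[b] += w * self.o_experts[e](att[b] @ v)  # (seq, d_model)
        return out
```

Because only one attention matrix is computed per head regardless of the expert count, widening the expert pool adds capacity without adding attention matrices, which is the source of the claimed compute and memory savings.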
A unified framework for establishing the universal approximation of transformer-type architectures
Cheng, Jingpu, Lin, Ting, Shen, Zuowei, Li, Qianxiao
We investigate the universal approximation property (UAP) of transformer-type architectures, providing a unified theoretical framework that extends prior results on residual networks to models incorporating attention mechanisms. Our work identifies token distinguishability as a fundamental requirement for UAP and introduces a general sufficient condition that applies to a broad class of architectures. Leveraging an analyticity assumption on the attention layer, we can significantly simplify the verification of this condition, providing a non-constructive approach to establishing UAP for such architectures. We demonstrate the applicability of our framework by proving UAP for transformers with various attention mechanisms, including kernel-based and sparse attention. The corollaries of our results either generalize prior works or establish UAP for architectures not previously covered. Furthermore, our framework offers a principled foundation for designing novel transformer architectures with inherent UAP guarantees, including those with specific functional symmetries. We propose examples to illustrate these insights.
- Asia > Singapore (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
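For reference, the property in question is standardly formalized as follows; the paper's exact sequence-to-sequence formulation may differ in the choice of norm and domain.

```latex
% Universal approximation property (standard form): for every continuous
% target on a compact token domain, some network in the class \mathcal{T}
% is uniformly \varepsilon-close to it.
\forall f \in C\!\left(K,\ \mathbb{R}^{d\times n}\right),\ \forall \varepsilon > 0,\ 
\exists\, T \in \mathcal{T}:\quad
\sup_{X \in K} \left\| T(X) - f(X) \right\| < \varepsilon,
\qquad K \subset \mathbb{R}^{d\times n}\ \text{compact}.
```

Token distinguishability then asks, roughly, that the architecture can map distinct input tokens to distinct representations, without which uniform approximation of arbitrary targets cannot hold.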
A Bayesian Hybrid Parameter-Efficient Fine-Tuning Method for Large Language Models
Chai, Yidong, Liu, Yang, Zhou, Yonghang, Xie, Jiaheng, Zeng, Daniel Dajun
Large Language Models (LLMs) have demonstrated transformative potential in reshaping the world. As these models are pretrained on general corpora, they often require domain-specific fine-tuning to optimize performance in specialized business applications. Due to their massive scale, parameter-efficient fine-tuning (PEFT) methods are widely used to reduce training costs. Among them, hybrid PEFT methods that combine multiple PEFT techniques have achieved the best performance. However, existing hybrid PEFT methods face two main challenges when fine-tuning LLMs for specialized applications: (1) they rely on point estimates and thus cannot quantify uncertainty for reliable decision-making, and (2) they struggle to adapt dynamically to emerging data, making them ill-suited to evolving real-world situations. We propose Bayesian Hybrid Parameter-Efficient Fine-Tuning (BH-PEFT), a novel method that integrates Bayesian learning into hybrid PEFT. BH-PEFT combines Adapter, LoRA, and prefix-tuning to fine-tune the feedforward and attention layers of the Transformer. By modeling learnable parameters as distributions, BH-PEFT enables uncertainty quantification. We further propose a Bayesian dynamic fine-tuning approach in which the last posterior serves as the prior for the next round, enabling effective adaptation to new data. We evaluated BH-PEFT on business tasks such as sentiment analysis, news categorization, and commonsense reasoning. Results show that our method outperforms existing PEFT baselines, enables uncertainty quantification for more reliable decisions, and improves adaptability in dynamic scenarios. This work contributes to business analytics and data science by proposing a novel BH-PEFT method and a dynamic fine-tuning approach that support uncertainty-aware and adaptive decision-making in real-world situations.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > Delaware > New Castle County > Newark (0.14)
- Europe > Austria > Vienna (0.14)
- Health & Medicine (1.00)
- Leisure & Entertainment (0.67)
- Banking & Finance > Trading (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.89)
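Of the three combined techniques, the Bayesian treatment is easiest to see on the LoRA component alone. The sketch below keeps Gaussian distributions over the low-rank factors, samples them with the reparameterization trick, and hands the trained posterior over as the next round's prior; the Adapter and prefix-tuning parts and the variational KL objective are omitted, and every name and hyperparameter is illustrative rather than BH-PEFT's specification.

```python
import torch
import torch.nn as nn

class BayesianLoRA(nn.Module):
    """Low-rank adapter whose factors are Gaussian distributions
    rather than point estimates (a sketch of the Bayesian-PEFT idea)."""
    def __init__(self, d_in, d_out, rank=8):
        super().__init__()
        self.mu_a = nn.Parameter(torch.randn(d_in, rank) * 0.01)
        self.logvar_a = nn.Parameter(torch.full((d_in, rank), -6.0))
        self.mu_b = nn.Parameter(torch.zeros(rank, d_out))
        self.logvar_b = nn.Parameter(torch.full((rank, d_out), -6.0))
        # Prior means start at zero; the KL(posterior || prior) term that
        # would regularize training toward them is omitted for brevity.
        self.register_buffer("prior_mu_a", torch.zeros(d_in, rank))
        self.register_buffer("prior_mu_b", torch.zeros(rank, d_out))

    @staticmethod
    def sample(mu, logvar):
        # Reparameterization trick: differentiable Gaussian sampling.
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x):
        a = self.sample(self.mu_a, self.logvar_a)
        b = self.sample(self.mu_b, self.logvar_b)
        return x @ a @ b                     # low-rank update, as in LoRA

    def advance_round(self):
        """Dynamic fine-tuning handoff: last posterior becomes the prior."""
        self.prior_mu_a.copy_(self.mu_a.detach())
        self.prior_mu_b.copy_(self.mu_b.detach())
```

Averaging several stochastic forward passes through sampled adapters then yields the predictive uncertainty that point-estimate PEFT cannot provide.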
Transformers Can Overcome the Curse of Dimensionality: A Theoretical Study from an Approximation Perspective
Jiao, Yuling, Lai, Yanming, Wang, Yang, Yan, Bokai
The Transformer model is widely used in various application areas of machine learning, such as natural language processing. This paper investigates the approximation of the Hölder continuous function class $\mathcal{H}_{Q}^{\beta}\left([0,1]^{d\times n},\mathbb{R}^{d\times n}\right)$ by Transformers and constructs several Transformers that can overcome the curse of dimensionality. These Transformers consist of one self-attention layer with one head and the softmax function as the activation function, along with several feedforward layers. For example, to achieve an approximation accuracy of $\varepsilon$, if the activation functions of the feedforward layers are ReLU and floor, only $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ feedforward layers are needed, with widths not exceeding $\mathcal{O}\left(\frac{1}{\varepsilon^{2/\beta}}\log\frac{1}{\varepsilon}\right)$. If other activation functions are allowed, the width of the feedforward layers can be further reduced to a constant. These results demonstrate that Transformers have strong expressive capability. The construction in this paper is based on the Kolmogorov-Arnold Representation Theorem and does not require the concept of contextual mapping, so our proof is more intuitive than previous Transformer approximation works. Additionally, the translation technique proposed in this paper helps transfer previous approximation results for feedforward neural networks to Transformers.
- Asia > China > Hubei Province > Wuhan (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Asia > China > Hong Kong > Kowloon (0.04)
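The Kolmogorov-Arnold Representation Theorem underpinning the construction states that every continuous multivariate function on the cube decomposes into sums and compositions of continuous univariate functions:

```latex
% Kolmogorov-Arnold representation of a continuous f on [0,1]^d:
f(x_1,\dots,x_d) \;=\; \sum_{q=0}^{2d} \Phi_q\!\left(\sum_{p=1}^{d} \varphi_{q,p}(x_p)\right),
\qquad \Phi_q \in C(\mathbb{R}),\ \varphi_{q,p} \in C([0,1]).
```

This reduces multivariate approximation to the univariate case, which is how the feedforward sublayers can sidestep the curse of dimensionality.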
The Asymptotic Behavior of Attention in Transformers
Abella, Álvaro Rodríguez, Silvestre, João Pedro, Tabuada, Paulo
A key component of transformers is the attention mechanism, which orchestrates how each token influences the propagation of every other token through a transformer. In this paper, we provide a rigorous mathematical analysis of the asymptotic properties of attention in transformers. Although we present several results based on different assumptions, all of them point to the same conclusion: all tokens asymptotically converge to each other, a phenomenon that has been empirically reported in the literature. Our findings are carefully compared with existing theoretical results and illustrated by simulations and experimental studies using the GPT-2 model.
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Asia > Middle East > Jordan (0.04)
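The convergence phenomenon is easy to reproduce numerically. The toy simulation below iterates a bare attention update with value and output projections set to the identity; since each softmax row is a strictly positive convex combination, every step shrinks the convex hull of the tokens. This is a simplified illustration, not the paper's exact setting.

```python
import torch

torch.manual_seed(0)
n, d, steps = 8, 16, 300
X = torch.randn(n, d)                    # n tokens of dimension d
Wq = torch.randn(d, d) / d ** 0.5        # fixed random query projection
Wk = torch.randn(d, d) / d ** 0.5        # fixed random key projection

for _ in range(steps):
    A = torch.softmax((X @ Wq) @ (X @ Wk).T / d ** 0.5, dim=-1)
    X = A @ X                            # row-stochastic averaging step

# The token "diameter" collapses: all pairwise distances tend to zero.
print(torch.cdist(X, X).max().item())
```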
Property Neurons in Self-Supervised Speech Transformers
Lin, Tzu-Quan, Lin, Guan-Ting, Lee, Hung-yi, Tang, Hao
There have been many studies analyzing self-supervised speech Transformers, in particular with layer-wise analysis. It is, however, desirable to have an approach that pinpoints the exact subset of neurons responsible for a particular property of speech, one amenable to model pruning and model editing. In this work, we identify a set of property neurons in the feedforward layers of Transformers to study how speech-related properties, such as phones, gender, and pitch, are stored. When we remove the neurons of a particular property (a simple form of model editing), the respective downstream performance degrades significantly, showing the importance of the property neurons. We apply this approach to pruning the feedforward layers in Transformers, where most of the model parameters are. We show that protecting property neurons during pruning is significantly more effective than norm-based pruning.
- Asia > Taiwan (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Europe > United Kingdom > Scotland > City of Edinburgh > Edinburgh (0.04)
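One simple way to realize this pipeline, sketched below: score each feedforward neuron by how strongly its activations separate a binary property, call the top scorers property neurons, then either zero them (editing) or exempt them from magnitude pruning. The selection criterion and function names here are plausible stand-ins, not necessarily the paper's.

```python
import torch

def property_neurons(acts, labels, top_k=64):
    """acts: (samples, n_neurons) activations of one feedforward layer;
    labels: (samples,) binary property labels (e.g., speaker gender).
    Scores neurons by the absolute between-class mean gap."""
    gap = (acts[labels == 1].mean(0) - acts[labels == 0].mean(0)).abs()
    return gap.topk(top_k).indices

def prune_protecting(weight, protected, keep_ratio=0.5):
    """Magnitude-prune neurons (rows of a feedforward weight matrix)
    while guaranteeing that property neurons survive."""
    scores = weight.norm(dim=1)
    scores[protected] = float("inf")     # protected neurons always kept
    mask = torch.zeros(weight.size(0), dtype=torch.bool)
    mask[scores.topk(int(weight.size(0) * keep_ratio)).indices] = True
    return weight * mask.unsqueeze(1)    # zero out pruned neurons
```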
Adversarially Balanced Representation for Continuous Treatment Effect Estimation
Kazemi, Amirreza, Ester, Martin
Individual treatment effect (ITE) estimation requires adjusting for the covariate shift between populations with different treatments, and deep representation learning has shown great promise in learning a balanced representation of covariates. However, the existing methods mostly consider the scenario of binary treatments. In this paper, we consider the more practical and challenging scenario in which the treatment is a continuous variable (e.g., the dosage of a medication), and we address the two main challenges of this setup. We propose the adversarial counterfactual regression network (ACFR), which adversarially minimizes the representation imbalance in terms of KL divergence and maintains the impact of the treatment value on the outcome prediction by leveraging an attention mechanism. Theoretically, we demonstrate that the ACFR objective function is grounded in an upper bound on the counterfactual outcome prediction error. Our experimental evaluation on semi-synthetic datasets demonstrates the empirical superiority of ACFR over a range of state-of-the-art methods.
- North America > United States > New York > New York County > New York City (0.14)
- North America > Canada (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
- Information Technology > Modeling & Simulation (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.46)
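A compact sketch of the adversarial-balancing loop described above, with two deliberate simplifications: the KL-divergence balancing term is replaced by a regression discriminator that tries to recover the continuous treatment from the representation, and the attention-based outcome head is reduced to plain concatenation. Dimensions, names, and hyperparameters are placeholders, not ACFR's.

```python
import torch
import torch.nn as nn

enc  = nn.Sequential(nn.Linear(25, 64), nn.ReLU(), nn.Linear(64, 32))
disc = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))  # guesses t
head = nn.Sequential(nn.Linear(33, 32), nn.ReLU(), nn.Linear(32, 1))  # predicts y

opt_enc = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(x, t, y, lam=0.1):
    z = enc(x)
    # 1) Discriminator learns to recover the treatment from the representation.
    opt_disc.zero_grad()
    mse(disc(z.detach()).squeeze(-1), t).backward()
    opt_disc.step()
    # 2) Encoder fits the outcome while making the treatment unrecoverable,
    #    pushing the representation toward treatment-balance.
    opt_enc.zero_grad()
    y_hat = head(torch.cat([z, t.unsqueeze(-1)], dim=-1)).squeeze(-1)
    (mse(y_hat, y) - lam * mse(disc(z).squeeze(-1), t)).backward()
    opt_enc.step()
```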
Fast Feedforward Networks
Belcak, Peter, Wattenhofer, Roger
We break the linear link between the layer size and its inference cost by introducing the fast feedforward (FFF) architecture, a log-time alternative to feedforward networks. We demonstrate that FFFs are up to 220x faster than feedforward networks, up to 6x faster than mixture-of-experts networks, and exhibit better training properties than mixtures of experts thanks to noiseless conditional execution. Pushing FFFs to the limit, we show that they can use as little as 1% of layer neurons for inference in vision transformers while preserving 94.2% of predictive performance.
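The log-time behavior comes from arranging neurons as leaves of a balanced binary tree and letting a chain of learned node decisions select a single leaf block per input. Below is an inference-mode sketch under that reading; training in the paper uses a noiseless conditional scheme rather than the hard routing shown, and all sizes and names here are assumptions.

```python
import torch
import torch.nn as nn

class FastFeedforward(nn.Module):
    """Inference-time sketch of a fast feedforward (FFF) network:
    `depth` learned node decisions route each input to one of
    2**depth leaf blocks, so cost is O(depth) = O(log #leaves)."""
    def __init__(self, d_model, depth=4, d_leaf=32):
        super().__init__()
        self.depth = depth
        self.nodes = nn.Parameter(torch.randn(2 ** depth - 1, d_model) * 0.02)
        self.leaves = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_leaf), nn.GELU(),
                          nn.Linear(d_leaf, d_model))
            for _ in range(2 ** depth))

    def forward(self, x):                        # x: (d_model,), unbatched
        node = 0
        for _ in range(self.depth):              # walk the decision tree
            go_right = int((self.nodes[node] @ x > 0).item())
            node = 2 * node + 1 + go_right       # heap-style child index
        return self.leaves[node - (2 ** self.depth - 1)](x)
```

Only one leaf's parameters are touched per input, which is how an FFF can use a tiny fraction of layer neurons at inference.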
NarrowBERT: Accelerating Masked Language Model Pretraining and Inference
Li, Haoxin, Keung, Phillip, Cheng, Daniel, Kasai, Jungo, Smith, Noah A.
Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput for masked language model pretraining by more than $2\times$. NarrowBERT sparsifies the transformer model such that the self-attention queries and feedforward layers only operate on the masked tokens of each sentence during pretraining, rather than all of the tokens as with the usual transformer encoder. We also show that NarrowBERT increases the throughput at inference time by as much as $3.5\times$ with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on the IMDB and Amazon reviews classification and CoNLL NER tasks and show that it is also comparable to standard BERT performance.
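The narrowing step can be pictured as a gather/scatter wrapped around the expensive sublayers: only the masked positions pass through, and everything else is carried forward unchanged. Here is a hedged sketch of that step alone, not the full NarrowBERT encoder; the function name and shapes are illustrative.

```python
import torch
import torch.nn as nn

def narrow_ffn(h, masked_idx, ffn):
    """Apply a feedforward block only at masked positions.
    h: (batch, seq, d); masked_idx: (batch, m) positions of [MASK] tokens;
    ffn: any d -> d module. Unmasked positions pass through untouched,
    which is where the pretraining throughput gain comes from."""
    idx = masked_idx.unsqueeze(-1).expand(-1, -1, h.size(-1))  # (batch, m, d)
    narrow = torch.gather(h, 1, idx)          # pull out the masked rows
    return h.scatter(1, idx, ffn(narrow))     # write updated rows back

# Toy usage: with ~15% of tokens masked, the FFN does ~15% of the work.
ffn = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
h = torch.randn(2, 128, 64)
masked_idx = torch.randint(0, 128, (2, 20))
out = narrow_ffn(h, masked_idx, ffn)          # same shape as h
```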